High-Level Semantic Feature Matters Few-Shot Unsupervised Domain Adaptation

Authors

Abstract

In few-shot unsupervised domain adaptation (FS-UDA), most existing methods followed the few-shot learning (FSL) methods to leverage low-level local features (learned from conventional convolutional models, e.g., ResNet) for classification. However, the goals of FS-UDA and FSL are relevant yet distinct, since FS-UDA aims to classify samples in the target domain rather than the source domain. We found that the local features are insufficient for FS-UDA: they could introduce noise or bias against classification and cannot be used to effectively align the two domains. To address the above issues, we aim to refine the local features into more discriminative high-level semantic features. Thus, we propose a novel task-specific semantic feature learning method (TSECS) for FS-UDA. TSECS learns high-level semantic features for image-to-class similarity measurement. Based on the high-level features, we design a cross-domain self-training strategy that leverages the few labeled source samples to build a classifier in the target domain. In addition, we minimize the KL divergence of the high-level feature distributions between the source and target domains to shorten the distance between samples of the two domains. Extensive experiments on DomainNet show that the proposed method significantly outperforms SOTA methods by a large margin (i.e., ~10%).
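The abstract names three ingredients without spelling them out: an image-to-class similarity measurement over high-level features, a cross-domain self-training step, and a KL-divergence alignment between source and target feature distributions. The following is a minimal sketch, not the authors' TSECS code, of how the first and third ingredients might be wired together in PyTorch; the encoder outputs, prototype construction, and temperature are placeholder assumptions.

```python
# Hedged sketch: image-to-class similarity against class prototypes, plus a
# KL-divergence term aligning the resulting source/target distributions.
# All names and hyperparameters here are illustrative assumptions, not the
# TSECS implementation.
import torch
import torch.nn.functional as F

def image_to_class_similarity(features, prototypes, temperature=0.1):
    """Cosine similarity of each image feature to each class prototype,
    returned as a softmax distribution over classes."""
    features = F.normalize(features, dim=-1)            # (N, D)
    prototypes = F.normalize(prototypes, dim=-1)        # (C, D)
    logits = features @ prototypes.t() / temperature    # (N, C)
    return F.softmax(logits, dim=-1)

def kl_alignment_loss(source_probs, target_probs, eps=1e-8):
    """KL divergence between the mean class-similarity distributions of the
    source and target batches, a simple stand-in for domain alignment."""
    p = source_probs.mean(dim=0).clamp_min(eps)
    q = target_probs.mean(dim=0).clamp_min(eps)
    return torch.sum(p * (p.log() - q.log()))

# Toy usage with random tensors standing in for encoder outputs
# (e.g., a 5-way episode with 512-dimensional features).
src_feats, tgt_feats = torch.randn(32, 512), torch.randn(32, 512)
prototypes = torch.randn(5, 512)
src_p = image_to_class_similarity(src_feats, prototypes)
tgt_p = image_to_class_similarity(tgt_feats, prototypes)
loss = kl_alignment_loss(src_p, tgt_p)
```

In the paper's setting, the class prototypes would come from the few labeled source samples and the target-side distribution from pseudo-labeled target samples produced by the self-training step; those details are not recoverable from the abstract alone.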


Similar Articles

Few-Shot Adversarial Domain Adaptation

This work provides a framework for addressing the problem of supervised domain adaptation with deep models. The main idea is to exploit adversarial learning to learn an embedded subspace that simultaneously maximizes the confusion between two domains while semantically aligning their embedding. The supervised setting becomes attractive especially when there are only a few target data samples th...


Feature-Level Domain Adaptation

Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (flda), that models the dependence between the two domains by means of a feature-leve...


Unsupervised Domain Adaptation with Feature Embeddings

Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches often require the specification of “pivot features” that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature template structure common in NLP problems.


Adversarial Feature Augmentation for Unsupervised Domain Adaptation

Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones...


Unsupervised Multi-Domain Adaptation with Feature Embeddings

Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches have two major weaknesses. First, they often require the specification of “pivot features” that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature tem...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i9.26306